artificial intelligence model
PSA-VLM: Enhancing Vision-Language Model Safety through Progressive Concept-Bottleneck-Driven Alignment
Liu, Zhendong, Nie, Yuanbi, Tan, Yingshui, Liu, Jiaheng, Yue, Xiangyu, Cui, Qiushi, Wang, Chongjun, Zhu, Xiaoyong, Zheng, Bo
Benefiting from the powerful capabilities of Large Language Models (LLMs), Vision Language Models (VLMs) are formed by connecting pre-trained visual encoders to LLMs. However, recent research shows that the visual modality in VLMs is highly vulnerable: attackers can bypass the safety alignment of the underlying LLM through visually transmitted content and launch harmful attacks. To address this challenge, we propose PSA-VLM, a progressive concept-based alignment strategy that incorporates safety modules as concept bottlenecks to strengthen safety alignment of the visual modality. By aligning model predictions with specific safety concepts, PSA-VLM improves defenses against risky images and enhances explainability and controllability while minimally impacting general performance. The method is trained in two stages: the first stage yields substantial safety gains at low computational cost, and fine-tuning the language model in the second stage improves safety performance further. Our method achieves state-of-the-art results on popular VLM safety benchmarks.
- Europe > Switzerland > Zürich > Zürich (0.14)
- Asia > China > Zhejiang Province > Hangzhou (0.04)
- Asia > China > Jiangsu Province > Nanjing (0.04)
- (3 more...)
- Information Technology > Security & Privacy (1.00)
- Law (0.68)
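The concept-bottleneck idea described in the PSA-VLM abstract can be sketched in a few lines: visual features are first mapped to a small set of interpretable safety-concept scores, and the risk prediction is then computed from those scores alone, which is what makes the prediction inspectable and controllable. The dimensions, weights, and concept count below are hypothetical placeholders for illustration, not PSA-VLM's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical sizes: 768-d visual features, 8 safety concepts, 2 risk classes.
W_concept = rng.normal(size=(768, 8)) * 0.02   # features -> concept logits
W_risk = rng.normal(size=(8, 2)) * 0.5         # concepts -> risk logits

def safety_head(visual_feats):
    # Bottleneck: every downstream prediction flows through these
    # human-readable concept scores in [0, 1].
    concepts = sigmoid(visual_feats @ W_concept)
    risk_logits = concepts @ W_risk            # prediction uses concepts only
    return risk_logits, concepts

risk_logits, concepts = safety_head(rng.normal(size=(4, 768)))
print(risk_logits.shape, concepts.shape)  # (4, 2) (4, 8)
```

Because the classifier sees only the concept scores, an operator can intervene by clamping a concept (e.g. forcing a "violence" score to zero) and observing how the risk prediction changes.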
Thirteen proteins in your blood could reveal the age of your brain
Researchers trained an artificial intelligence model to gauge people's ages from their brain scans. The abundance of 13 proteins in your blood seems to be a strong indicator of how rapidly your brain is ageing. This suggests that blood tests could one day help people track and even boost their brain health. Most previous studies that have looked at protein markers of brain ageing in the blood have involved fewer than 1000 people, says Nicholas Seyfried at Emory University in Atlanta, Georgia, who wasn't involved in the new research. To get a broader idea of the impact of these proteins, Wei-Shi Liu at Fudan University in China and his colleagues analysed MRI brain scan data from nearly 11,000 adults from the UK Biobank project, whose ages ranged from around 50 to 80 at the time of imaging. Using data from 70 per cent of the participants, Liu's team trained an artificial intelligence model to predict how old the participants were based on features of the brain images, such as the size of different brain regions and how distinct parts connect to each other.
- North America > United States > Georgia > Fulton County > Atlanta (0.26)
- Asia > China (0.26)
Use of What-if Scenarios to Help Explain Artificial Intelligence Models for Neonatal Health
Mamun, Abdullah, Devoe, Lawrence D., Evans, Mark I., Britt, David W., Klein-Seetharaman, Judith, Ghasemzadeh, Hassan
Early detection of intrapartum risk enables interventions that can potentially prevent or mitigate adverse labor outcomes such as cerebral palsy. Currently, there is no accurate automated system to predict such events and assist with clinical decision-making. To fill this gap, we propose "Artificial Intelligence (AI) for Modeling and Explaining Neonatal Health" (AIMEN), a deep learning framework that not only predicts adverse labor outcomes from maternal, fetal, obstetrical, and intrapartum risk factors but also provides the model's reasoning behind its predictions. The latter can offer insights into what modifications to the model's input variables could have changed the predicted outcome. We address the challenges of imbalanced and small datasets by synthesizing additional training data using Adaptive Synthetic Sampling (ADASYN) and Conditional Tabular Generative Adversarial Networks (CTGAN). AIMEN uses an ensemble of fully connected neural networks as the backbone for its classification, with data augmentation supported by either ADASYN or CTGAN. AIMEN supported by CTGAN outperforms AIMEN supported by ADASYN in classification. AIMEN can predict a high risk for adverse labor outcomes with an average F1 score of 0.784. It also provides counterfactual explanations that can be achieved by changing 2 to 3 attributes on average. Resources available: https://github.com/ab9mamun/AIMEN.
- North America > United States > Georgia > Richmond County > Augusta (0.14)
- North America > United States > Arizona > Maricopa County > Phoenix (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- (2 more...)
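AIMEN's ADASYN-based augmentation can be illustrated with a minimal, self-contained sketch (not the authors' implementation; real use would rely on a library such as imbalanced-learn). The core idea of ADASYN is that minority-class points surrounded by more majority-class neighbours are harder to learn, so they receive more synthetic samples. One simplification is flagged in the comments: the interpolation partner here is a random minority point, whereas true ADASYN interpolates toward a minority nearest neighbour.

```python
import numpy as np

rng = np.random.default_rng(42)

def adasyn_like(X_min, X_maj, k=5, n_new=None):
    """Minimal ADASYN-style oversampler: minority points with more majority
    neighbours (i.e. harder to classify) receive more synthetic samples."""
    if n_new is None:
        n_new = len(X_maj) - len(X_min)  # synthesize enough to balance classes
    X_all = np.vstack([X_min, X_maj])
    is_maj = np.array([False] * len(X_min) + [True] * len(X_maj))

    # Fraction of majority points among each minority point's k nearest neighbours.
    ratios = []
    for x in X_min:
        d = np.linalg.norm(X_all - x, axis=1)
        nn = np.argsort(d)[1:k + 1]          # skip the point itself (distance 0)
        ratios.append(is_maj[nn].mean())
    weights = np.array(ratios)
    if weights.sum() == 0:                   # no majority neighbours: spread evenly
        weights = np.full(len(X_min), 1 / len(X_min))
    else:
        weights = weights / weights.sum()

    # Allocate the n_new synthetic samples according to the difficulty weights.
    counts = rng.multinomial(n_new, weights)
    synth = []
    for xi, c in zip(X_min, counts):
        for _ in range(c):
            # Simplification: random minority partner instead of a true
            # minority nearest neighbour, as in full ADASYN.
            xz = X_min[rng.integers(len(X_min))]
            lam = rng.random()
            synth.append(xi + lam * (xz - xi))  # linear interpolation
    return np.array(synth).reshape(-1, X_min.shape[1])

X_min = rng.normal(0.0, 1.0, size=(20, 4))
X_maj = rng.normal(1.0, 1.0, size=(80, 4))
X_new = adasyn_like(X_min, X_maj)
print(len(X_new))  # 60 -> classes balanced after augmentation
```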
AIs can work together in much larger groups than humans ever could
We can struggle to maintain working relationships when our social group grows too large, but it seems that artificial intelligence models may not face the same limitation, suggesting thousands of AIs could work together to solve problems that humans can't. The idea that there is a fundamental limit on how many people we can interact with dates back to the 1990s, when anthropologist Robin Dunbar noticed a link between the size of a primate's brain and the typical size of its social group.
Why Protesters Around the World Are Demanding a Pause on AI Development
Just one week before the world's second-ever global summit on artificial intelligence, protesters from a small but growing movement called "Pause AI" demanded that the world's governments regulate AI companies and freeze the development of new cutting-edge artificial intelligence models. They say that development of these models should only be allowed to continue if companies agree to have them thoroughly evaluated for safety first. Protests took place across thirteen countries, including the U.S., the U.K., Brazil, Germany, Australia, and Norway on Monday. In London, a group of 20 or so protesters stood outside the U.K.'s Department for Science, Innovation and Technology chanting things like "stop the race, it's not safe" and "whose future?" The protesters say their goal is to get governments to regulate the companies developing frontier AI models, including OpenAI's ChatGPT. They say that companies are not taking enough precautions to make sure their AI models are safe enough to be released into the world. "[AI companies] have proven time and time again… through the way that these companies' workers are treated, with the way that they treat other people's work by literally stealing it and throwing it into their models, they have proven that they cannot be trusted," said Gideon Futerman, an Oxford undergraduate student who gave a speech at the protest. One protester, Tara Steele, a freelance writer who works on blogs and SEO content, said that she had seen the technology affect her own livelihood. "I have noticed since ChatGPT came out, the demand for freelance work has reduced dramatically," she says. "I love writing personally… I've really loved it.
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.92)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.92)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.71)
The Pursuit of Fairness in Artificial Intelligence Models: A Survey
Kheya, Tahsin Alamgir, Bouadjenek, Mohamed Reda, Aryal, Sunil
Artificial Intelligence (AI) models are now used in all facets of our lives, including healthcare, education, and employment. Since they operate in numerous sensitive environments and make decisions that can be life-altering, potentially biased outcomes are a pressing concern. Developers should ensure that such models do not exhibit unexpected discriminatory behaviour, such as bias against particular genders, ethnicities, or people with disabilities. With the ubiquitous deployment of AI systems, researchers and practitioners are becoming more aware of unfair models and are working to mitigate the bias in them. Significant research has addressed these issues to ensure models do not intentionally or unintentionally perpetuate bias. This survey offers a synopsis of the different ways researchers have promoted fairness in AI systems. We explore the definitions of fairness found in the current literature, create a comprehensive taxonomy by categorizing different types of bias, and investigate cases of biased AI across application domains. We then conduct a thorough study of the approaches and techniques researchers employ to mitigate bias in AI models. We also examine the impact of biased models on user experience and the ethical considerations involved in developing and deploying such models. We hope this survey helps researchers and practitioners understand the intricate details of fairness and bias in AI systems, and that it promotes further discourse in the domain of equitable and responsible AI.
- North America > United States > New York > New York County > New York City (0.14)
- Europe > Austria > Vienna (0.14)
- Oceania > Australia > Victoria (0.04)
- (29 more...)
- Overview (1.00)
- Research Report > Promising Solution (0.45)
- Media (1.00)
- Law > Civil Rights & Constitutional Law (1.00)
- Information Technology > Services (1.00)
- (7 more...)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.93)
Opportunities and challenges in the application of large artificial intelligence models in radiology
Pan, Liangrui, Zhao, Zhenyu, Lu, Ying, Tang, Kewei, Fu, Liyong, Liang, Qingchun, Peng, Shaoliang
Influenced by ChatGPT, large artificial intelligence (AI) models have seen a global upsurge in research and development. As people come to enjoy the convenience these large models bring, more and more large models for specialized fields are being proposed, particularly in radiology imaging. This article first introduces the development history of large models, their technical details, workflows, and the working principles of multimodal large models and of video-generation large models. Secondly, we summarize the latest research progress of large AI models in radiology education and radiology report generation, as well as unimodal and multimodal applications in radiology. Finally, we summarize some of the challenges facing large AI models in radiology, with the aim of better promoting the rapid revolution in the field of radiography.
- North America > United States (0.46)
- Asia > China > Hunan Province (0.14)
- Asia > China > Shanghai > Shanghai (0.04)
- (6 more...)
- Health & Medicine > Nuclear Medicine (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
Information loss from dimensionality reduction in 5D-Gaussian spectral data
Understanding the loss of information in spectral analytics is a crucial first step towards finding root causes for failures and uncertainties when spectral data are used in artificial intelligence models built for modern, complex data science applications. Here, we show, from an elementary Shannon entropy model analysis with quantum statistics of Gaussian-distributed spectral data, that the relative loss of information from dimensionality reduction, due to projecting an initial five-dimensional dataset onto two-dimensional diagrams, is less than one percent in the parameter range of small datasets with sample sizes on the order of a few hundred data samples. From our analysis, we also conclude that the density and expectation value of the entropy probability distribution increase with the sample number and sample size, using artificial data models derived from random-sampling Monte Carlo simulation methods.
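For Gaussian-distributed data, the effect of projecting onto fewer dimensions is easy to quantify directly, because a multivariate Gaussian has a closed-form differential entropy, H = ½ ln((2πe)^d det Σ), and a coordinate projection simply keeps a submatrix of the covariance. The covariance below is an illustrative random example, not the paper's spectral data or its quantum-statistics model.

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy (in nats) of a d-dimensional Gaussian:
    H = 0.5 * ln((2*pi*e)^d * det(cov))."""
    d = cov.shape[0]
    return 0.5 * np.log((2 * np.pi * np.e) ** d * np.linalg.det(cov))

# Hypothetical 5D covariance with mild correlations (illustrative only).
rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))
cov5 = A @ A.T + 5 * np.eye(5)        # symmetric positive definite

H5 = gaussian_entropy(cov5)           # entropy of the full 5D distribution
H2 = gaussian_entropy(cov5[:2, :2])   # entropy of the 2D marginal (projection)
print(f"H(5D) = {H5:.3f} nats, H(2D marginal) = {H2:.3f} nats")
```

The gap H(5D) - H(2D) reflects the information carried by the discarded coordinates; for empirical spectral data the entropies would instead be estimated from histograms of finite samples, which is where the sample-size dependence discussed in the abstract enters.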
NoxTrader: LSTM-Based Stock Return Momentum Prediction for Quantitative Trading
Liu, Hsiang-Hui, Shu, Han-Jay, Chiu, Wei-Ning
We introduce NoxTrader, a sophisticated system designed for portfolio construction and trading execution, with the primary objective of achieving profitable outcomes in the stock market, specifically moderate to long-term profits. NoxTrader's underlying learning process is rooted in assimilating valuable insights from historical trading data, with a particular focus on time-series analysis given the nature of the dataset employed. In our approach, we use price and volume data from the US stock market for feature engineering to generate effective features, including Return Momentum, Week Price Momentum, and Month Price Momentum. We choose a Long Short-Term Memory (LSTM) model to capture continuous price trends and implement dynamic model updates during trade execution, enabling the model to continuously adapt to current market trends. Notably, we have developed a comprehensive trading backtesting system, NoxTrader, which allows us to manage portfolios based on predictive scores and to use custom evaluation metrics for a thorough assessment of our trading performance. Our rigorous feature engineering and careful selection of prediction targets enable us to generate prediction data with an impressive correlation range between 0.65 and 0.75. Finally, we monitor the dispersion of our prediction data and perform a comparative analysis against actual market data. Through the use of filtering techniques, we improved the initial -60% investment return to 325%.
- Asia > Taiwan (0.05)
- Asia > Georgia > Tbilisi > Tbilisi (0.05)
- North America > United States > Connecticut > Fairfield County > Stamford (0.04)
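The momentum features named in the NoxTrader abstract (Return Momentum, Week Price Momentum, Month Price Momentum) are not defined there; a conventional reading is the lookback return over 1, 5, and 21 trading days, sketched below under that assumption on a synthetic price series.

```python
import numpy as np

def momentum_features(prices, week=5, month=21):
    """Sketch of momentum-style features over a daily close-price series.
    The exact NoxTrader definitions are not public; these are the usual
    lookback-return forms, used here purely as an illustration."""
    p = np.asarray(prices, dtype=float)
    daily = p[1:] / p[:-1] - 1.0          # 1-day return momentum
    weekly = p[week:] / p[:-week] - 1.0   # ~1 trading week lookback
    monthly = p[month:] / p[:-month] - 1.0  # ~1 trading month lookback
    return daily, weekly, monthly

# Synthetic 60-day price path (geometric random walk), not real market data.
rets = np.random.default_rng(7).normal(0.0005, 0.01, 60)
prices = 100 * np.cumprod(1 + rets)
d, w, m = momentum_features(prices)
print(len(d), len(w), len(m))  # 59 55 39
```

Each longer lookback consumes that many leading observations, so in practice such features are aligned on a common date index before being fed to the LSTM.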
'Rocky' star Dolph Lundgren has high hopes for AI's use in cancer research
The 'America's Got Talent' judge told Fox News Digital why he doesn't like AI technology in songwriting. "Rocky IV" star Dolph Lundgren is looking forward to seeing artificial intelligence used in the medical field. "AI I'm sure will be extremely useful," Lundgren told Fox News Digital. He continued, "They used it to find that COVID vaccine so quickly. And they've actually taken those same algorithms and used it in cancer research [I know] because I'm a little bit involved in that now." Earlier this year, CNBC reported that Moderna had partnered with IBM to use generative AI and quantum computing to advance mRNA technology, a key component in the company's COVID-19 vaccine. The agreement could help speed up Moderna's work on new vaccines and therapies. Last month, Insilico Medicine, an AI-driven biotech company based in Hong Kong and New York City, announced that its new AI-designed drug for COVID-19 had entered Phase I clinical trials and could become an alternative to current antivirals like Paxlovid and Lagevrio. Lundgren has been battling cancer since 2015, after doctors found a tumor in his kidney. He underwent surgery and was doing well until 2020, when additional tumors were discovered. In 2021, he was told one tumor had grown to the "size of a lemon" and was inoperable, but he later sought a second opinion, and alternative treatments were made available to the actor. Lundgren told Fox News Digital that he is now doing well, though still facing a somewhat tough road. "Last time I checked, you know, everything was good," he said. "There is no cancer, and they check the blood for cancer cells." He continued, "I think I'll always have to be on some kind of treatment and be aware of this, but I live a normal life now.
- North America > United States > New York (0.25)
- Asia > China > Hong Kong (0.25)
- North America > United States > Massachusetts (0.07)
- Research Report > Experimental Study (0.92)
- Research Report > Strength Medium (0.56)
- Health & Medicine > Therapeutic Area > Oncology (1.00)
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (1.00)
- Health & Medicine > Therapeutic Area > Immunology (1.00)